
    Recognizing Emotions in a Foreign Language

    Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face, and it is assumed that these emotions can likewise be recognized from a speaker's voice, regardless of an individual's culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances ("nonsense speech") produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language (an "in-group advantage"). Our findings suggest that the ability to understand vocally expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although it is also shaped by linguistic and cultural variables.
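
    As a minimal sketch of the above-chance comparison such results rest on, the Python snippet below runs a one-sided binomial test against the five-alternative chance level of 0.2. The trial and hit counts are invented for illustration and are not the study's data.

```python
# Hypothetical sketch: does recognition accuracy exceed chance?
# With five response options (joy, sadness, anger, fear, disgust),
# chance performance is 1/5 = 0.2. Counts are illustrative only.
from scipy.stats import binomtest

n_trials = 60    # pseudo-utterances judged per language (assumed)
n_correct = 27   # correct identifications (assumed)
chance = 1 / 5   # five basic-emotion response options

result = binomtest(n_correct, n_trials, chance, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4f}")
```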

    Associating Facial Expressions and Upper-Body Gestures with Learning Tasks for Enhancing Intelligent Tutoring Systems

    Learning involves a range of cognitive, social, and emotional states. Recognizing and understanding these states in the context of learning is therefore key to designing informed interventions and addressing the needs of the individual student to provide personalized education. In this paper, we explore the automatic detection of learners' nonverbal behaviors, including hand-over-face gestures, head and eye movements, and emotions expressed via facial expressions, during learning. The proposed computer-vision-based behavior monitoring method uses a low-cost webcam and can easily be integrated with modern tutoring technologies. We investigate these behaviors in depth over time in a 40-minute classroom session involving reading and problem-solving exercises. The exercises in the session are divided into three categories: easy, medium, and difficult topics within the context of undergraduate computer science. We found a significant increase in head and eye movements as time progresses, as well as with increasing difficulty level. We demonstrated that hand-over-face gestures occur frequently (on average 21.35% of the time) during the 40-minute session, a behavior that remains unexplored in the education domain. We propose a novel deep learning approach for automatic detection of hand-over-face gestures in images, with a classification accuracy of 86.87%. Hand-over-face gestures increase prominently when the difficulty level of the given exercise increases, and they occur more frequently during problem-solving exercises (easy 23.79%, medium 19.84%, difficult 30.46%) than during reading (easy 16.20%, medium 20.06%, difficult 20.18%).
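
    The abstract does not specify the network architecture, so the sketch below shows one plausible setup under common assumptions: a pretrained ResNet-18 backbone fine-tuned as a binary hand-over-face classifier on webcam face crops. All names, shapes, and the choice of backbone are illustrative, not the paper's method.

```python
# Minimal sketch of a binary hand-over-face classifier, assuming a
# standard transfer-learning setup; the paper's exact architecture
# and training details are not given in the abstract.
import torch
import torch.nn as nn
from torchvision import models

class HandOverFaceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Pretrained backbone; final layer replaced for 2 classes:
        # no occlusion vs. hand-over-face.
        self.backbone = models.resnet18(weights="IMAGENET1K_V1")
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, x):  # x: (batch, 3, 224, 224) face crops
        return self.backbone(x)

model = HandOverFaceClassifier()
logits = model(torch.randn(1, 3, 224, 224))
print(logits.softmax(dim=1))  # P(no occlusion), P(hand-over-face)
```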

    Mood Induction in Depressive Patients: A Comparative Multidimensional Approach

    Anhedonia, reduced positive affect, and enhanced negative affect are integral characteristics of major depressive disorder (MDD). Emotion dysregulation, e.g. in terms of different emotion processing deficits, has consistently been reported. The aim of the present study was to investigate mood changes in depressive patients using a multidimensional approach to measuring emotional reactivity to mood induction procedures. Experimentally, mood states can be altered using various mood induction procedures. The present study aimed to validate two different positive mood induction procedures in patients with MDD and to investigate which procedure is more effective and applicable in detecting dysfunctions in MDD. The first procedure relied on the presentation of happy vs. neutral faces, while the second used funny vs. neutral cartoons. Emotional reactivity was assessed in 16 depressed and 16 healthy subjects using self-report measures, measurements of electrodermal activity, and standardized analyses of facial responses. According to subjective ratings, positive mood induction was successful in both procedures for patients and controls. In the cartoon condition, however, patients showed a discrepancy between reduced facial activity and concurrently enhanced autonomic reactivity. The multidimensional assessment technique provided a more comprehensive estimate of dysfunctions in emotional reactivity in MDD than self-report measures alone, and this was revealed especially by the cartoon-based mood induction procedure. The divergent facial and autonomic responses in the presence of unaffected subjective reactivity suggest an underlying deficit in the patients' ability to express the felt arousal to funny cartoons. Our results encourage the application of both procedures in functional imaging studies investigating the neural substrates of emotion dysregulation in MDD patients. Mood induction via cartoons appears to be superior to mood induction via faces and autobiographical material in uncovering specific emotional dysfunctions in MDD.

    Emotional design and human-robot interaction

    Recent years have seen an increase in the importance of emotions in the design field, giving rise to Emotional Design. Emotional design aims to elicit certain emotions (e.g., pleasure) and prevent others (e.g., displeasure) during human-product interaction; that is, it regulates the emotional interaction between the individual and the product (e.g., a robot). Robot design is a growing area in which robots interact directly with humans, and emotions are essential to that interaction. This paper therefore explores, through a non-systematic literature review, the application of emotional design to Human-Robot Interaction. Robot design features (e.g., appearance, expressing emotions, and spatial distance) that affect emotional design are introduced. The chapter ends with a discussion and a conclusion.

    The development of cross-cultural recognition of vocal emotion during childhood and adolescence

    Humans have an innate set of emotions that are recognised universally. However, emotion recognition also depends on socio-cultural rules. Although adults recognise vocal emotions universally, they identify emotions more accurately in their native language. We examined developmental trajectories of universal vocal emotion recognition in children. Eighty native English speakers completed a vocal emotion recognition task in their native language (English) and in foreign languages (Spanish, Chinese, and Arabic) expressing anger, happiness, sadness, fear, and neutrality. Emotion recognition was compared across 8-to-10-year-olds, 11-to-13-year-olds, and adults. Measures of behavioural and emotional problems were also taken. Results showed that although emotion recognition was above chance for all languages, native English-speaking children were more accurate in recognising vocal emotions in their native language. Recognition of vocal emotion from the native language improved more strongly during adolescence. Vocal anger recognition did not improve with age for the non-native languages. This is the first study to demonstrate universality of vocal emotion recognition in children whilst supporting an "in-group advantage" for more accurate recognition in the native language. The findings highlight the role of experience in emotion recognition, have implications for child development in modern multicultural societies, and address important theoretical questions about the nature of emotions.

    As Far as the Eye Can See: Relationship between Psychopathic Traits and Pupil Response to Affective Stimuli

    Psychopathic individuals show a range of affective processing deficits, typically associated with the interpersonal/affective component of psychopathy. However, previous research has been inconsistent as to whether psychopathy, within both offender and community populations, is associated with deficient autonomic responses to the simple presentation of affective stimuli. Changes in pupil diameter occur in response to emotionally arousing stimuli and can be used as an objective indicator of physiological reactivity to emotion. This study used pupillometry to explore whether psychopathic traits within a community sample were associated with hypo-responsivity to the affective content of stimuli. Pupil activity was recorded for 102 adult community participants (52 female) in response to affective (both negative and positive) and affectively neutral stimuli, which included images of scenes, static facial expressions, dynamic facial expressions, and sound-clips. Psychopathic traits were measured using the Triarchic Psychopathy Measure. Pupil diameter was larger in response to negative stimuli, but pupil size was comparable across pleasant and neutral stimuli. A linear relationship between subjective arousal and pupil diameter was found in response to sound-clips, but was not evident in response to scenes. Contrary to predictions, psychopathy was unrelated to emotional modulation of pupil diameter across all stimuli, and the findings were unchanged when participant gender was considered. This suggests that psychopathy within a community sample is not associated with autonomic hypo-responsivity to affective stimuli; this effect is discussed in relation to later defensive/appetitive mobilisation deficits.
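
    Pupillometry analyses of this kind typically compare stimulus-evoked pupil dilation against a pre-stimulus baseline. The study's exact preprocessing pipeline is not described in the abstract, so the following sketch shows only the standard baseline-correction step, run on simulated data with invented parameters.

```python
# Illustrative sketch of baseline-corrected pupil response, a common
# pupillometry preprocessing step; sampling rate, baseline window,
# and all data below are assumptions, not the study's pipeline.
import numpy as np

def pupil_response(trace, fs=60, baseline_s=0.5):
    """Mean pupil-diameter change relative to a pre-stimulus baseline.

    trace: 1-D array of pupil diameters for one trial, with the
    stimulus onset at index fs * baseline_s.
    """
    onset = int(fs * baseline_s)
    baseline = np.nanmean(trace[:onset])  # pre-stimulus mean
    return np.nanmean(trace[onset:]) - baseline

# One simulated trial: 0.5 s baseline + 3 s stimulus at 60 Hz.
rng = np.random.default_rng(0)
trial = 3.0 + 0.02 * rng.standard_normal(210)
trial[30:] += 0.15  # simulated dilation after stimulus onset
print(f"dilation = {pupil_response(trial):.3f} (simulated units)")
```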

    The Body Action and Posture Coding System (BAP): Development and Reliability

    Several methods are available for coding body movement in nonverbal behavior research, but there is no consensus on a reliable coding system for the study of emotion expression. Adopting an integrative approach, we developed a new method, the Body Action and Posture (BAP) coding system, for the time-aligned micro-description of body movement on an anatomical level (different articulations of body parts), a form level (direction and orientation of movement), and a functional level (communicative and self-regulatory functions). We applied the system to a new corpus of acted emotion portrayals, examined its comprehensiveness, and demonstrated intercoder reliability at three levels: (a) occurrence, (b) temporal precision, and (c) segmentation. We discuss issues for further validation and propose some research applications.
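
    Occurrence-level intercoder reliability of the kind reported here is commonly quantified with a chance-corrected agreement index such as Cohen's kappa. The abstract does not name the specific statistic used, so the snippet below is an illustrative sketch with invented occurrence codes.

```python
# Sketch of an occurrence-level intercoder agreement check using
# Cohen's kappa, one standard chance-corrected statistic; the codes
# below are invented and do not come from the BAP corpus.
from sklearn.metrics import cohen_kappa_score

# Per-segment occurrence codes from two coders (1 = behavior present).
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement
```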